
    Quasar microlensing light curve analysis using deep machine learning

    We introduce, for the first time, a deep machine learning approach to studying quasar microlensing light curves, analyzing hundreds of thousands of simulated light curves with respect to the accretion disc size and temperature profile. Our results indicate that it is possible to successfully classify very large numbers of diverse light curve data and to measure the accretion disc structure. The detailed shape of the accretion disc brightness profile is found to play a negligible role, in agreement with Mortonson et al. (2005). The speed and efficiency of our deep machine learning approach are ideal for quantifying physical properties in a `big-data' problem setup. The proposed approach looks promising for analyzing the decade-long light curves of thousands of microlensed quasars expected to be provided by the Large Synoptic Survey Telescope.
    Comment: 11 pages, 7 figures, accepted for publication in MNRAS
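    The classification setup described above can be sketched schematically: a network takes a sampled light curve as input and returns class scores over discretized disc parameters. The toy dense network below is only an illustration of that input/output structure, not the paper's actual architecture; all sizes and names are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical sizes: a 200-epoch light curve classified into 5 disc-size bins
    n_epochs, n_classes, n_hidden = 200, 5, 32

    def forward(light_curve, w1, b1, w2, b2):
        """One forward pass: light curve -> hidden ReLU layer -> softmax class scores."""
        h = np.maximum(0.0, light_curve @ w1 + b1)   # hidden features
        logits = h @ w2 + b2                          # one logit per disc-size bin
        e = np.exp(logits - logits.max())             # numerically stable softmax
        return e / e.sum()

    # Randomly initialized (untrained) weights, for shape illustration only
    w1 = rng.normal(scale=0.1, size=(n_epochs, n_hidden))
    b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=0.1, size=(n_hidden, n_classes))
    b2 = np.zeros(n_classes)

    scores = forward(rng.normal(size=n_epochs), w1, b1, w2, b2)
    ```

    In practice the network would be trained on the hundreds of thousands of simulated light curves mentioned in the abstract; this sketch only fixes the mapping from a light curve to a probability distribution over disc-structure classes.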

    A new parameter space study of cosmological microlensing

    Cosmological gravitational microlensing is a useful technique for understanding the structure of the inner parts of a quasar, especially the accretion disk and the central supermassive black hole. So far, most cosmological microlensing studies have focused on single objects from the ~90 currently known lensed quasars. However, present and planned all-sky surveys are expected to discover thousands of new lensed systems. Using a graphics processing unit (GPU) accelerated ray-shooting code, we have generated 2550 magnification maps uniformly across the convergence (κ) and shear (γ) parameter space of interest to microlensing. We examine the effect of random realizations of the microlens positions on map properties such as the magnification probability distribution (MPD). It is shown that for most of the parameter space a single map is representative of the average behaviour. All of the simulations have been carried out on the GPU-Supercomputer for Theoretical Astrophysics Research (gSTAR).
    Comment: 16 pages, 10 figures, accepted for publication in MNRAS
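    The MPD comparison described above can be sketched as follows: histogram the pixel magnifications of a map, then compare histograms from two independent random realizations. The lognormal "maps" here are toy stand-ins for real ray-shooting output; the function name and binning are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def magnification_probability_distribution(mag_map, bins=100):
        """Normalized histogram of a map's pixel magnifications, in magnitudes."""
        dmag = 2.5 * np.log10(mag_map)  # magnification relative to the mean, in mag
        hist, edges = np.histogram(dmag, bins=bins, range=(-2.0, 2.0), density=True)
        return hist, edges

    # Two toy "maps": independent random realizations with identical statistics,
    # standing in for two microlens position realizations at fixed (kappa, gamma)
    map_a = rng.lognormal(mean=0.0, sigma=0.3, size=(500, 500))
    map_b = rng.lognormal(mean=0.0, sigma=0.3, size=(500, 500))

    mpd_a, edges = magnification_probability_distribution(map_a)
    mpd_b, _ = magnification_probability_distribution(map_b)

    # If a single map is representative of the average behaviour,
    # the two MPDs should agree closely bin by bin
    max_diff = float(np.abs(mpd_a - mpd_b).max())
    ```

    A per-bin comparison like `max_diff` (or a Kolmogorov-Smirnov-style statistic) is one simple way to quantify how representative a single realization is.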

    A Quasar Microlensing Light Curve Generator for LSST

    We present a tool to generate mock quasar microlensing light curves and sample them according to any observing strategy. An updated treatment of the fixed and random velocity components of observer, lens, and source is used, together with a proper alignment with the external shear defining the magnification map caustic orientation. Our tool produces quantitative results on high magnification events and caustic crossings, which we use to study three lensed quasars known to display microlensing, viz. RX J1131-1231, HE 0230-2130, and Q 2237+0305, as they would be monitored by the Rubin Observatory Legacy Survey of Space and Time (LSST). We conclude that, depending on the location on the sky, the lens and source redshifts, and the caustic network density, the microlensing variability may deviate significantly from the expected ~20-year average time scale (Mosquera & Kochanek 2011, arXiv:1104.2356). We estimate that ~300 high magnification events with Δmag > 1 mag could potentially be observed by LSST each year. The duration of the majority of high magnification events is between 10 and 100 days, requiring a very high cadence to capture and resolve them. Uniform LSST observing strategies perform the best in recovering microlensing high magnification events. Our web tool can be extended to any instrument and observing strategy, and is freely available as a service at http://gerlumph.swin.edu.au/tools/lsst_generator/, along with all the related code.
    Comment: 10 pages, 6 figures, 2 tables. Published in MNRAS. Updated Table
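    The core of such a light curve generator is sampling a source track across a magnification map at the epochs of an observing strategy, then flagging high magnification events above a Δmag threshold. The following is a minimal sketch of that idea, not the actual web tool; the map, track, and function names are all hypothetical.

    ```python
    import numpy as np

    def extract_light_curve(mag_map, start, velocity, times, pixel_scale=1.0):
        """Sample a straight-line source track across a magnification map.
        start: (x, y) in pixels; velocity: (vx, vy) in pixels/day;
        times: observation epochs in days (e.g. a survey cadence)."""
        xs = start[0] + velocity[0] * times / pixel_scale
        ys = start[1] + velocity[1] * times / pixel_scale
        ix = np.clip(xs.astype(int), 0, mag_map.shape[1] - 1)
        iy = np.clip(ys.astype(int), 0, mag_map.shape[0] - 1)
        return 2.5 * np.log10(mag_map[iy, ix])  # microlensing signal in magnitudes

    def count_high_mag_events(dmag, threshold=1.0):
        """Count contiguous stretches deviating from the median by > threshold mag."""
        above = np.abs(dmag - np.median(dmag)) > threshold
        starts = above[1:] & ~above[:-1]
        return int(np.count_nonzero(starts) + (1 if above[0] else 0))

    # Toy demonstration: a flat map with one bright band mimicking a caustic
    mag_map = np.ones((100, 100))
    mag_map[50, 40:55] = 10.0  # hypothetical caustic-crossing region
    times = np.arange(0.0, 100.0)  # daily cadence, in days
    dmag = extract_light_curve(mag_map, start=(0, 50), velocity=(1.0, 0.0), times=times)
    n_events = count_high_mag_events(dmag)  # the track crosses the band once
    ```

    A real generator would interpolate between pixels, convolve the map with a source profile, and combine observer/lens/source velocities as the abstract describes; the event-counting logic, however, reduces to thresholding exactly as above.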

    Microlensing flux ratio predictions for Euclid

    Quasar microlensing flux ratios are used to unveil properties of the lenses in large collections of lensed quasars, like the ones expected to be produced by the Euclid survey. This is achieved using the direct survey products, without any (expensive) follow-up observations or monitoring. First, the theoretical flux ratio distribution of samples of hundreds of mock quasar lenses is calculated for different Initial Mass Functions (IMFs) and Sérsic radial profiles for the lens compact matter distribution. Then, mock observations are created and compared to the models in order to recover the underlying one. The most important factor for determining the flux ratio properties of such samples is the value of the smooth matter fraction at the location of the multiple images. Doubly lensed CASTLES-like quasars are the most promising systems for constraining the IMF and the mass components for a sample of lenses.
    Comment: 14 pages, 12 figures, 3 tables, accepted for publication in MNRAS

    Data Compression in the Petascale Astronomy Era: a GERLUMPH case study

    As the volume of data grows, astronomers are increasingly faced with choices on what data to keep -- and what to throw away. Recent work evaluating the JPEG2000 (ISO/IEC 15444) standards as a future data format standard in astronomy has shown promising results on observational data. However, there is still a need to evaluate its potential on other types of astronomical data, such as output from numerical simulations. GERLUMPH (the GPU-Enabled High Resolution cosmological MicroLensing parameter survey) represents an example of a data-intensive project in theoretical astrophysics. In the next phase of processing, the ~27 terabyte GERLUMPH dataset is set to grow by a factor of 100 -- well beyond the current storage capabilities of the supercomputing facility on which it resides. In order to minimise bandwidth usage, file transfer time, and storage space, this work evaluates several data compression techniques. Specifically, we investigate off-the-shelf and custom lossless compression algorithms as well as the lossy JPEG2000 compression format. Lossless compression of the GERLUMPH data products yields small compression ratios (1.35:1 to 4.69:1 of input file size), varying with the nature of the input data. Our results suggest that JPEG2000 could be suitable for other numerical datasets stored as gridded or volumetric data. When approaching lossy data compression, one should keep in mind the intended purposes of the data to be compressed, and evaluate the effect of the loss on future analysis. In our case study, lossy compression with a high compression ratio does not significantly compromise the intended use of the data for constraining quasar source profiles from cosmological microlensing.
    Comment: 15 pages, 9 figures, 5 tables. Published in the Special Issue of Astronomy & Computing on the future of astronomical data formats
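    The lossless part of such an evaluation amounts to compressing the raw byte stream of a gridded dataset with several codecs, verifying the exact round trip, and recording input/output size ratios. The sketch below uses stock Python codecs on a synthetic float32 grid standing in for a map tile; it is an illustration of the methodology, not the paper's actual pipeline or custom algorithms.

    ```python
    import bz2
    import lzma
    import zlib

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic gridded data: a running sum gives spatially correlated values,
    # which compress better than pure noise (as with real simulation output)
    grid = np.cumsum(rng.normal(size=(256, 256)).astype(np.float32), axis=1)
    raw = grid.tobytes()

    ratios = {}
    for name, compress, decompress in [
        ("zlib", lambda b: zlib.compress(b, 9), zlib.decompress),
        ("bz2", lambda b: bz2.compress(b, 9), bz2.decompress),
        ("lzma", lzma.compress, lzma.decompress),
    ]:
        packed = compress(raw)
        assert decompress(packed) == raw  # lossless: exact byte-for-byte round trip
        ratios[name] = len(raw) / len(packed)  # e.g. 1.35:1 ... 4.69:1 in the study
        print(f"{name}: {ratios[name]:.2f}:1")
    ```

    A lossy evaluation (JPEG2000) would replace the exact round-trip check with a domain-specific error metric, reflecting the abstract's point that acceptable loss depends on the intended analysis.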

    Modeling lens potentials with continuous neural fields in galaxy-scale strong lenses

    Strong gravitational lensing is a unique observational tool for studying the dark and luminous mass distribution both within and between galaxies. Given the presence of substructures, current strong lensing observations demand more complex mass models than smooth analytical profiles, such as power-law ellipsoids. In this work, we introduce a continuous neural field to predict the lensing potential at any position throughout the image plane, allowing for a nearly model-independent description of the lensing mass. We apply our method to simulated Hubble Space Telescope imaging data containing different types of perturbations to a smooth mass distribution: a localized dark subhalo, a population of subhalos, and an external shear perturbation. Assuming knowledge of the source surface brightness, we use the continuous neural field to model either the perturbations alone or the full lensing potential. In both cases, the resulting model is able to fit the imaging data, and we are able to accurately recover the properties of both the smooth potential and the perturbations. Unlike many other deep learning methods, ours explicitly retains the lensing physics (i.e., the lens equation) and introduces high flexibility in the model only where required, namely in the lens potential. Moreover, the neural network does not require pre-training on large sets of labelled data and predicts the potential from the single observed lensing image. Our model is implemented in the fully differentiable lens modeling code Herculens.
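    The "retains the lensing physics" point can be made concrete: the lens equation β = θ − ∇ψ(θ) is kept fixed, and only the potential ψ = ψ_smooth + δψ is made flexible. Below, a toy analytic bump plays the role of the neural field's δψ; the profile choice, perturbation, and finite-difference gradient are all illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def sis_potential(theta, theta_e=1.0):
        """Smooth macro-model: singular isothermal sphere, psi = theta_E * |theta|."""
        return theta_e * np.linalg.norm(theta, axis=-1)

    def delta_potential(theta):
        """Stand-in for the continuous neural field's output: a small,
        localized perturbation to the potential (a toy Gaussian bump)."""
        offset = theta - np.array([0.5, 0.0])
        return 0.01 * np.exp(-np.sum(offset**2, axis=-1))

    def deflection(psi, theta, eps=1e-4):
        """alpha = grad psi, via central finite differences."""
        grad = np.zeros_like(theta, dtype=float)
        for i in range(2):
            dp = np.zeros(2)
            dp[i] = eps
            grad[..., i] = (psi(theta + dp) - psi(theta - dp)) / (2 * eps)
        return grad

    def lens_equation(theta):
        """beta = theta - grad(psi_smooth + delta_psi): the physics stays
        explicit, and only the potential term is made flexible."""
        total = lambda t: sis_potential(t) + delta_potential(t)
        return theta - deflection(total, theta)

    theta = np.array([1.2, 0.3])  # an image-plane position, in arcsec
    beta = lens_equation(theta)   # the corresponding source-plane position
    ```

    In a differentiable code like Herculens the gradient of ψ would come from automatic differentiation rather than finite differences, which is what makes a neural δψ trainable against the imaging data.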